1,217 research outputs found

    Formal Verification and Fault Mitigation for Small Avionics Platforms using Programmable Logic

    As commercial and personal unmanned aircraft gain popularity and begin to account for more traffic in the sky, the reliability and integrity of their flight controllers become increasingly important. As these aircraft grow larger and begin operating over longer distances and at higher altitudes, they will interact with other controlled air traffic, and the consequences of a failure in the control system become much more severe. As any engineer who has worked on space-bound technology knows, digital systems do not always behave exactly as they are supposed to. This can be attributed to high-energy particles in the atmosphere that deposit energy randomly throughout a digital circuit. These single-event effects can produce transient logic levels and alter the state of registers in a circuit, corrupting data and possibly leading to a failure of the flight controller. Such effects become more common as altitude increases and as the number of registers in a digital system grows. High-integrity flight controllers also require more development effort to show that they meet the required standard. Formal methods can be used to verify digital systems and prove that they meet certain specifications. For traditional software systems that perform many tasks on shared computational resources, formal methods can be difficult, if not impossible, to apply. Using discrete logic controllers in the form of FPGAs greatly simplifies multitasking by removing the need for shared resources. This simplicity allows formal methods to be applied during the development of the flight control algorithms and device drivers. In this thesis we propose and demonstrate a flight controller implemented entirely within an FPGA to investigate the differences and difficulties compared with traditional CPU software implementations. We go further to provide examples of formal verification of specific parts of the flight control firmware to demonstrate the ease with which this can be achieved. We also make efforts to protect the flight controller from the effects of radiation at higher altitudes, using both passive hardware design and active register-transfer-level algorithms.
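    The abstract does not name its register-transfer-level mitigation technique, but a common RTL-level defence against single-event upsets is triple modular redundancy (TMR): each register is tripled and a majority voter masks a corrupted copy. A minimal, hypothetical sketch of the bitwise voter (assumed for illustration, not taken from the thesis):

    ```python
    def tmr_vote(a: int, b: int, c: int) -> int:
        """Bitwise majority vote over three redundant register copies.

        Any bit flipped in only one copy is out-voted by the other two,
        masking a single-event upset in that copy.
        """
        return (a & b) | (a & c) | (b & c)

    # A single-event upset flips bit 3 in one copy; the vote recovers the value.
    value = 0b1011_0010
    upset = value ^ 0b0000_1000
    assert tmr_vote(value, value, upset) == value
    ```

    The same voter, expressed in an HDL, would be instantiated per protected register; a double upset in two copies of the same bit would defeat it, which is why scrubbing is typically combined with TMR.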

    Skill transfer, expertise and talent development: An ecological dynamics perspective

    In this paper, we propose an ecological dynamics perspective on expertise and talent development, with a focus on the role of skill transfer. The ecological dynamics theoretical framework provides an integrated explanation for human behaviour in sport, predicated on a conceptualisation that combines constraints on dynamical systems, ecological psychology, and a complex systems approach in neurobiology. Three main pillars are presented: the individual–environment coupling as the smallest unit of analysis; the adaptation of a complex dynamical system to interacting constraints; and the regulation of action by perception. These pillars frame a discussion of the functional role of behavioural variability, the usefulness of perceptual-motor exploration, and the importance of general and specific skill transfer in the development of talent and expertise in athletes. In addition, practical implications for coaches and instructors are discussed, notably regarding early diversification and unstructured play and activities in talent development programs, but also variable practice and the manipulation of constraints.

    A Supersymmetric Theory of Flavor and R Parity

    We construct a renormalizable, supersymmetric theory of flavor and R parity based on the discrete flavor group (S_3)^3. The model can account for all the masses and mixing angles of the Standard Model, while maintaining sufficient squark degeneracy to circumvent the supersymmetric flavor problem. By starting with a simpler set of flavor symmetry breaking fields than we have suggested previously, we construct an economical Froggatt-Nielsen sector that generates the desired elements of the fermion Yukawa matrices. With the particle content above the flavor scale completely specified, we show that all renormalizable R-parity-violating interactions involving the ordinary matter fields are forbidden by the flavor symmetry. Thus, R parity arises as an accidental symmetry in our model. Planck-suppressed operators that violate R parity, if present, can be rendered harmless by taking the flavor scale to be ≲ 8 × 10^10 GeV.
    Comment: 28 pages, LaTeX, 1 PostScript figure

    A roadmap for the future of crowd safety research and practice: Introducing the Swiss Cheese Model of Crowd Safety and the imperative of a Vision Zero target

    Crowds can be subject to intrinsic and extrinsic sources of risk, and previous records have shown that, in the absence of adequate safety measures, these sources of risk can jeopardise human lives. To mitigate these risks, we propose that the implementation of multiple layers of safety measures for crowds—what we label The Swiss Cheese Model of Crowd Safety—should become the norm for crowd safety practice. Such a system incorporates a multitude of safety protection layers, including regulations and policymaking, planning and risk assessment, operational control, community preparedness, and incident response. The underlying premise of the model is that when one or more layers of safety protection fail, the remaining layers can still prevent an accident. In practice, the model requires a more effective implementation of technology, which can enable the provision of real-time data, improved communication and coordination, and efficient incident response. Moreover, implementing the model requires more attention to the overlooked role of public education, awareness raising, and the promotion of a crowd safety culture at broad community levels, as one of the last lines of defence against catastrophic outcomes for crowds. A widespread safety culture and awareness have the potential to empower individuals with the knowledge and skills that can prevent such outcomes or mitigate their impacts when all other (exogenous) layers of protection, such as planning and operational control, fail. This requires safety campaigns and the development of widespread educational programs. We conclude that there is no panacea for the crowd safety problem, but a holistic, multi-layered safety system that draws on the active participation of all potential stakeholders can significantly reduce the likelihood of disastrous accidents. 
    At a global level, we need to target a Vision Zero of Crowd Safety, i.e., set a global initiative of bringing deaths and severe injuries in crowded spaces to zero by a set year.
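    The layered-defence premise can be illustrated numerically: if the layers failed independently (a strong simplifying assumption, made here only for illustration and not claimed by the model), the residual probability that an incident penetrates every layer is the product of the individual layer failure probabilities. A hypothetical sketch:

    ```python
    def residual_risk(layer_failure_probs):
        """Probability that an incident defeats every protection layer,
        assuming the layers fail independently (a strong simplification:
        real layers, e.g. planning and operational control, are correlated).
        """
        risk = 1.0
        for p in layer_failure_probs:
            risk *= p
        return risk

    # Five hypothetical layers (regulation, planning, operational control,
    # community preparedness, incident response), each failing 10% of the time:
    print(residual_risk([0.1] * 5))  # ≈ 1e-05
    ```

    The arithmetic makes the qualitative point of the abstract concrete: several individually imperfect layers can, in combination, drive the residual risk far below what any single layer achieves.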

    Changes in adductor strength after competition in Academy Rugby Union Players

    © 2016 National Strength and Conditioning Association. This study determined the magnitude of change in adductor strength after a competitive match in academy rugby union players and examined the relationship between the locomotive demands of match-play and changes in post-match adductor strength. A within-subject repeated-measures design was used. Fourteen academy rugby union players (age, 17.4 ± 0.8 years; height, 182.7 ± 7.6 cm; body mass, 86.2 ± 11.6 kg) participated in the study. Each player performed 3 maximal adductor squeezes at 45° of hip flexion before and immediately, 24, 48, and 72 hours post-match. A global positioning system was used to assess the locomotive demands of match-play. Trivial decreases in adductor squeeze scores occurred immediately (−1.3 ± 2.5%; effect size [ES] = −0.11 ± 0.21; likely, 74%) and 24 hours after the match (−0.7 ± 3%; ES = −0.06 ± 0.25; likely, 78%), whereas a small but substantial increase occurred at 48 hours (3.8 ± 1.9%; ES = 0.32 ± 0.16; likely, 89%) before reducing to trivial at 72 hours after the match (3.1 ± 2.2%; ES = 0.26 ± 0.18; possibly, 72%). Large individual variation in adductor strength was observed at all time points. The relationship between changes in adductor strength and distance covered at sprinting speed (VO2max 81%) was large immediately post-match (p = 0.056, r = −0.521), moderate at 24 hours (p = 0.094, r = −0.465), and very large at 48 hours post-match (p = 0.005, r = −0.707). Players who cover greater distances sprinting may suffer greater adductor fatigue in the first 48 hours after competition. The assessment of adductor strength using the adductor squeeze test should be considered post-match to identify players who may require additional rest before returning to field-based training.
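    The effect sizes above are standardised mean differences. The paper's exact inferential approach is not reproduced here, but a generic Cohen's d with pooled standard deviation, applied to made-up squeeze scores (not study data), can be sketched as:

    ```python
    import math

    def cohens_d(pre, post):
        """Standardised mean difference (Cohen's d) using the pooled SD."""
        n1, n2 = len(pre), len(post)
        m1, m2 = sum(pre) / n1, sum(post) / n2
        v1 = sum((x - m1) ** 2 for x in pre) / (n1 - 1)
        v2 = sum((x - m2) ** 2 for x in post) / (n2 - 1)
        pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
        return (m2 - m1) / pooled_sd

    # Hypothetical adductor squeeze scores in arbitrary units:
    pre = [230, 245, 240, 235]
    post = [225, 240, 238, 232]
    print(round(cohens_d(pre, post), 2))  # negative d = post-match decrease
    ```

    A negative d corresponds to the post-match strength decreases reported in the abstract; thresholds near 0.2, 0.6, and 1.2 are commonly used to label effects trivial, small, and moderate in sports science.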

    Safety, immunogenicity, and reactogenicity of BNT162b2 and mRNA-1273 COVID-19 vaccines given as fourth-dose boosters following two doses of ChAdOx1 nCoV-19 or BNT162b2 and a third dose of BNT162b2 (COV-BOOST): a multicentre, blinded, phase 2, randomised trial


    Differential cross section measurements for the production of a W boson in association with jets in proton–proton collisions at √s = 7 TeV

    Measurements are reported of differential cross sections for the production of a W boson, which decays into a muon and a neutrino, in association with jets, as a function of several variables, including the transverse momenta (pT) and pseudorapidities of the four leading jets, the scalar sum of jet transverse momenta (HT), and the difference in azimuthal angle between the directions of each jet and the muon. The data sample of pp collisions at a centre-of-mass energy of 7 TeV was collected with the CMS detector at the LHC and corresponds to an integrated luminosity of 5.0 fb^−1. The measured cross sections are compared to predictions from Monte Carlo generators, MadGraph + pythia and sherpa, and to next-to-leading-order calculations from BlackHat + sherpa. The differential cross sections are found to be in agreement with the predictions, apart from the pT distributions of the leading jets at high pT values, the distributions of HT at high HT and low jet multiplicity, and the distribution of the difference in azimuthal angle between the leading jet and the muon at low values.
    Funding: United States Department of Energy; National Science Foundation (U.S.); Alfred P. Sloan Foundation

    Risk Portfolio Optimization Using the Markowitz MVO Model in Relation to Human Limitations in Predicting the Future, from the Perspective of the Al-Qur'an

    Risk portfolio management in modern finance has become increasingly technical, requiring the use of sophisticated mathematical tools in both research and practice. Since companies cannot insure themselves completely against risk, given the human inability to predict the future precisely, as written in the Al-Qur'an, surah Luqman, verse 34, they have to manage it to yield an optimal portfolio. The objective here is to minimize the variance over all portfolios, or alternatively, to maximize expected return over all portfolios that have at least a certain expected return. This study focuses on optimizing the risk portfolio using the Markowitz MVO (Mean-Variance Optimization) model. The theoretical frameworks for the analysis are the arithmetic mean, geometric mean, variance, covariance, linear programming, and quadratic programming. Finding a minimum-variance portfolio produces a convex quadratic program: minimize the objective function x^T Σ x subject to the constraints μ^T x ≥ r and Ax = b. The outcome of this research is the optimal risk portfolio over a set of investments, obtained using MATLAB R2007b software together with graphical analysis.
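    For the equality-constrained core of the problem (treating the return target as binding, so the constraints reduce to a stacked linear system Ax = b), the minimum-variance weights have a closed-form Lagrangian solution. A sketch in Python with NumPy rather than MATLAB, using a made-up two-asset covariance matrix and expected returns:

    ```python
    import numpy as np

    def min_variance_portfolio(Sigma, mu, target_return):
        """Closed-form minimum-variance weights for
        min x^T Sigma x  s.t.  1^T x = 1  and  mu^T x = target_return.

        Stationarity of the Lagrangian, 2*Sigma*x = A^T lam with A x = b,
        gives x = Sigma^{-1} A^T (A Sigma^{-1} A^T)^{-1} b.
        """
        n = len(mu)
        A = np.vstack([np.ones(n), mu])   # budget and target-return constraints
        b = np.array([1.0, target_return])
        Sinv = np.linalg.inv(Sigma)
        return Sinv @ A.T @ np.linalg.solve(A @ Sinv @ A.T, b)

    # Two hypothetical assets: 8% and 12% expected return, illustrative covariance.
    Sigma = np.array([[0.04, 0.01],
                      [0.01, 0.09]])
    mu = np.array([0.08, 0.12])
    w = min_variance_portfolio(Sigma, mu, target_return=0.10)
    print(w)  # weights summing to 1 that hit the 10% target return
    ```

    With the inequality constraint μ^T x ≥ r, or with short-selling forbidden (x ≥ 0), the closed form no longer applies and a quadratic-programming solver is needed, which is the role MATLAB's optimization routines play in the study.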